Lecture 12 — November 3. Lecturer: Lester Mackey

Scribes

  • Jae Hyuck Park
  • Christian Fong
Abstract

In the first part of the course, we focused on optimal inference in the setting of point estimation (see Figure 12.2). We formulated this problem in the framework of decision theory and focused on finite-sample criteria of optimality. We quickly discovered that uniform optimality is seldom attainable in practice, and so we developed our theory of optimality along two lines: constraining and collapsing. To restrict attention to interesting subclasses of estimators, we first introduced the notion of unbiasedness, which led us to UMRUEs and UMVUEs. We then considered symmetry constraints in the context of location-invariant decision problems; this was formalized via the concept of equivariance, which led us to MREs. The situation is similar in hypothesis testing: to develop a useful notion of optimality, we will need to impose constraints on tests. These constraints arise in the form of risk bounds, unbiasedness, and equivariance. The other route to optimality was to collapse the risk function into a single number. We introduced the notions of average risk (optimized by Bayes estimators) and worst-case risk (optimized by minimax estimators), and we saw that Bayes estimators have many desirable properties and provide tools for reasoning about both concepts. We also considered several model families, such as exponential families and location and scale families. These approaches yielded concrete notions of optimality: for exponential families with complete sufficient statistics, optimal unbiased estimators were easy to find, and for location-scale families we defined best equivariant estimators.
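As a small illustration of the average-risk idea (a hedged sketch, not part of the original notes; the sample size, prior variance, and variable names below are illustrative choices): for X₁, …, Xₙ i.i.d. N(θ, 1) with prior θ ~ N(0, τ²) and squared-error loss, the Bayes estimator is the posterior mean, a shrunken sample mean. A Monte Carlo comparison of its average (Bayes) risk against that of the unbiased sample mean, the UMVUE:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau2, reps = 10, 1.0, 20000

# Draw theta from the N(0, tau2) prior, then data X_1..X_n ~ N(theta, 1).
theta = rng.normal(0.0, np.sqrt(tau2), size=reps)
x = rng.normal(theta[:, None], 1.0, size=(reps, n))
xbar = x.mean(axis=1)

# Posterior mean (Bayes estimator under squared error):
# shrink xbar toward the prior mean 0 by the factor tau2 / (tau2 + 1/n).
shrink = tau2 / (tau2 + 1.0 / n)
bayes = shrink * xbar

risk_umvue = np.mean((xbar - theta) ** 2)   # average risk of the UMVUE, about 1/n
risk_bayes = np.mean((bayes - theta) ** 2)  # average risk of the Bayes rule, about tau2/(n*tau2 + 1)

print(risk_bayes < risk_umvue)
```

The simulation confirms that the Bayes rule achieves a strictly smaller average risk than the unbiased estimator, the price being bias toward the prior mean.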





Publication date: 2015